GraphRank Pro+: Advancing Talent Analytics Through Knowledge Graphs and Sentiment-Enhanced Skill Profiling

Velampalli, Sirisha, Muniyappa, Chandrashekar

arXiv.org Artificial Intelligence

The extraction of information from semi-structured text, such as resumes, has long been a challenge due to diverse formatting styles and subjective content organization. Conventional solutions rely on specialized logic tailored to specific use cases. We instead propose an approach leveraging structured graphs, Natural Language Processing (NLP), and Deep Learning. By abstracting intricate logic into graph structures, we transform raw data into a comprehensive Knowledge Graph. This framework enables precise information extraction and sophisticated querying. We systematically construct dictionaries assigning skill weights, paving the way for nuanced talent analysis. Our system not only benefits job recruiters and curriculum designers but also empowers job seekers with targeted query-based filtering and ranking capabilities.
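The skill-weight and ranking idea can be sketched as a toy graph in plain Python. Candidate names, skills, and weights below are invented for illustration, not taken from the paper:

```python
# Toy sketch of the skill-graph idea: candidates are linked to extracted
# skills, a weight dictionary scores each skill, and candidates are ranked
# against a recruiter's query. All names and weights are illustrative.

SKILL_WEIGHTS = {"python": 3.0, "nlp": 2.5, "sql": 1.5, "excel": 1.0}

# Adjacency-list "knowledge graph": candidate -> set of extracted skills
graph = {
    "cand_a": {"python", "nlp"},
    "cand_b": {"sql", "excel"},
    "cand_c": {"python", "sql", "excel"},
}

def rank_candidates(graph, query_skills):
    """Score each candidate by the summed weights of matched query skills."""
    scores = {}
    for cand, skills in graph.items():
        matched = skills & set(query_skills)
        scores[cand] = sum(SKILL_WEIGHTS.get(s, 0.0) for s in matched)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_candidates(graph, ["python", "nlp"]))
```

A real system would populate the graph from NLP-parsed resumes; the query-then-rank step stays the same.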


Cracking the Code: Enhancing Development finance understanding with artificial intelligence

Beaucoral, Pierre

arXiv.org Artificial Intelligence

Analyzing development projects is crucial for understanding donors' aid strategies and recipients' priorities, and for assessing the capacity of development finance to address development issues through on-the-ground actions. In this area, the Organisation for Economic Co-operation and Development's (OECD) Creditor Reporting System (CRS) dataset is a reference data source. This dataset provides a vast collection of project narratives from various sectors (approximately 5 million projects). While the OECD CRS provides a rich source of information on development strategies, it falls short in informing project purposes because its reporting process rests on donors' self-declared main objectives and pre-defined industrial sectors. This research employs a novel approach that combines Machine Learning (ML) techniques, specifically Natural Language Processing (NLP), with BERTopic, a Python topic modeling technique, to categorise (cluster) and label development projects based on their narrative descriptions. By revealing existing yet hidden topics of development finance, this application of artificial intelligence enables a better understanding of donor priorities and overall development funding, and provides methods to analyse the narratives of public and private projects.
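BERTopic itself clusters transformer embeddings (via HDBSCAN) and labels clusters with c-TF-IDF; the underlying cluster-then-label idea can be shown with a standard-library stand-in. The narratives and threshold below are invented examples:

```python
from collections import Counter

# Toy stand-in for the cluster-and-label step: group project narratives by
# word overlap (Jaccard similarity) and label each group by its most
# frequent terms. BERTopic does this at scale with embeddings; these
# narratives are invented.

narratives = [
    "build rural water supply and sanitation infrastructure",
    "expand water sanitation access in rural districts",
    "train teachers and supply primary school materials",
    "primary school teacher training programme",
]

def jaccard(a, b):
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def cluster(docs, threshold=0.2):
    """Greedy single-pass clustering against each cluster's first document."""
    clusters = []
    for doc in docs:
        for c in clusters:
            if jaccard(doc, c[0]) >= threshold:
                c.append(doc)
                break
        else:
            clusters.append([doc])
    return clusters

def label(cluster_docs, k=3):
    """Label a cluster with its k most frequent words."""
    words = Counter(w for d in cluster_docs for w in d.split())
    return [w for w, _ in words.most_common(k)]

for c in cluster(narratives):
    print(label(c), len(c))
```

On these four narratives, the water/sanitation projects and the school projects fall into separate clusters.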


Using Large Language Models for a standard assessment mapping for sustainable communities

Jonveaux, Luc

arXiv.org Artificial Intelligence

This paper presents a new approach to urban sustainability assessment that uses Large Language Models (LLMs) to streamline the ISO 37101 framework, automating and standardising the assessment of urban initiatives against the six "sustainability purposes" and twelve "issues" outlined in the standard. The methodology includes the development of a custom prompt based on the standard's definitions and its application to two datasets: 527 projects from the Paris Participatory Budget and 398 activities from the PROBONO Horizon 2020 project. The results show the effectiveness of LLMs in quickly and consistently categorising different urban initiatives according to sustainability criteria. The approach is particularly promising for breaking down silos in urban planning by providing a holistic view of projects' impact. The paper discusses the advantages of this method over traditional human-led assessments, including significant time savings and improved consistency, while also underlining the continued importance of human expertise in interpreting results and of ethical considerations. This study contributes to the growing body of work on AI applications in urban planning and provides a novel method for operationalising standardised sustainability frameworks in different urban contexts.
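The prompt-construction step can be sketched as a fixed template embedding the six ISO 37101 purposes (paraphrased here; the paper's actual prompt wording is not reproduced, and the LLM call is left as a stub):

```python
# Sketch of building a categorisation prompt from the standard's definitions.
# The six ISO 37101 purposes are paraphrased; the twelve "issues" would be
# listed the same way. Sending the prompt to a chat-completion API is stubbed.

PURPOSES = [
    "attractiveness",
    "preservation and improvement of the environment",
    "resilience",
    "responsible resource use",
    "social cohesion",
    "well-being",
]

TEMPLATE = (
    "You are assessing an urban initiative against ISO 37101.\n"
    "Purposes: {purposes}\n"
    "Project: {project}\n"
    "Return the purposes this project contributes to, as a comma-separated list."
)

def build_prompt(project_description):
    """Assemble one assessment prompt for a single project description."""
    return TEMPLATE.format(purposes="; ".join(PURPOSES),
                           project=project_description)

prompt = build_prompt("Install shared composting bins in a city park.")
print(prompt)
# An LLM API call on `prompt` would go here; looping over the 527 Paris
# projects is then a one-line map over their descriptions.
```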


Can We Trust AI Agents? An Experimental Study Towards Trustworthy LLM-Based Multi-Agent Systems for AI Ethics

de Cerqueira, José Antonio Siqueira, Agbese, Mamia, Rousi, Rebekah, Xi, Nannan, Hamari, Juho, Abrahamsson, Pekka

arXiv.org Artificial Intelligence

Ethical AI development is crucial as new technologies and concerns emerge, but objective, practical ethical guidance remains debated. This study examines LLMs in developing ethical AI systems, assessing how trustworthiness-enhancing techniques affect ethical AI output generation. Using the Design Science Research (DSR) method, we identify techniques for LLM trustworthiness: multi-agents, distinct roles, structured communication, and multiple rounds of debate. We design the multi-agent prototype LLM-BMAS, where agents engage in structured discussions on real-world ethical AI issues from the AI Incident Database. The prototype's performance is evaluated through thematic analysis, hierarchical clustering, ablation studies, and source code execution. Our system generates around 2,000 lines per run, compared to only 80 lines in the ablation study. Discussions reveal terms like bias detection, transparency, accountability, user consent, GDPR compliance, fairness evaluation, and EU AI Act compliance, showing LLM-BMAS's ability to generate thorough source code and documentation addressing often-overlooked ethical AI issues. However, practical challenges in source code integration and dependency management may limit smooth system adoption by practitioners. This study aims to shed light on enhancing trustworthiness in LLMs to support practitioners in developing ethical AI-based systems.
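The multi-agent pattern the study evaluates — distinct roles, structured communication, multiple debate rounds — can be sketched with stub agents so the control flow is visible. Role names and messages here are illustrative, not the LLM-BMAS prototype's actual agents:

```python
# Minimal sketch of a structured multi-agent debate. Real agents would wrap
# LLM calls and read the transcript; here each "agent" is a stub so only
# the roles/rounds control flow is shown. All names are illustrative.

def ethicist(topic, transcript):
    return f"ethicist: flag bias and consent risks in {topic}"

def engineer(topic, transcript):
    return f"engineer: propose logging and tests for {topic}"

def auditor(topic, transcript):
    return f"auditor: check GDPR / EU AI Act fit for {topic}"

AGENTS = [ethicist, engineer, auditor]

def debate(topic, rounds=2):
    """Run structured rounds; every agent sees the growing transcript."""
    transcript = []
    for r in range(rounds):
        for agent in AGENTS:
            transcript.append((r, agent(topic, transcript)))
    return transcript

for rnd, msg in debate("a facial-recognition incident"):
    print(rnd, msg)
```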


Chattronics: using GPTs to assist in the design of data acquisition systems

Brown, Jonathan Paul Driemeyer, Weber, Tiago Oliveira

arXiv.org Artificial Intelligence

The usefulness of Large Language Models (LLMs) is being continuously tested in various fields. However, their intrinsically linguistic character is still one of the limiting factors when applying these models to the exact sciences. In this article, a novel approach is presented that uses Generative Pre-Trained Transformers (GPTs) to assist in the design phase of data acquisition systems. The solution is packaged as an application that retains the conversational aspects of LLMs: the user provides details on the desired project, and the model drafts both a system-level architectural diagram and the block-level specifications, following a constraint-based top-down methodology. To test this tool, two distinct user emulations were used, one of which relies on an additional GPT model. In total, four data acquisition projects were used in the testing phase, each with its own measurement requirements: angular position, temperature, acceleration, and a fourth project with both pressure and surface temperature measurements. After 160 test iterations, the study concludes that these models have the potential to serve adequately as synthesis/assistant tools for data acquisition systems, but technological limitations remain. The results show coherent architectures and topologies, but the GPTs have difficulty considering all requirements simultaneously and often make theoretical mistakes.


Leveraging Large Language Models for Concept Graph Recovery and Question Answering in NLP Education

Yang, Rui, Yang, Boming, Ouyang, Sixun, She, Tianwei, Feng, Aosong, Jiang, Yuang, Lecue, Freddy, Lu, Jinghui, Li, Irene

arXiv.org Artificial Intelligence

In the domain of Natural Language Processing (NLP), Large Language Models (LLMs) have demonstrated promise in text-generation tasks. However, their educational applications, particularly for domain-specific queries, remain underexplored. This study investigates LLMs' capabilities in educational scenarios, focusing on concept graph recovery and question-answering (QA). We assess LLMs' zero-shot performance in creating domain-specific concept graphs and introduce TutorQA, a new expert-verified NLP-focused benchmark for scientific graph reasoning and QA. TutorQA consists of five tasks with 500 QA pairs. To tackle TutorQA queries, we present CGLLM, a pipeline integrating concept graphs with LLMs for answering diverse questions. Our results indicate that LLMs' zero-shot concept graph recovery is competitive with supervised methods, showing an average 3% F1 score improvement. In TutorQA tasks, LLMs achieve up to 26% F1 score enhancement. Moreover, human evaluation and analysis show that CGLLM generates answers with more fine-grained concepts.
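The edge-level F1 behind the reported concept-graph numbers compares predicted prerequisite edges against expert-annotated edges. The tiny graphs below are invented for illustration:

```python
# F1 over concept-graph edges: each edge is a (prerequisite, concept) pair,
# and predictions are scored against a gold edge set. The edges here are
# invented examples, not TutorQA data.

def f1(predicted, gold):
    """Harmonic mean of precision and recall over edge sets."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

gold_edges = {("tokenization", "parsing"), ("parsing", "semantics"),
              ("tokenization", "tagging")}
pred_edges = {("tokenization", "parsing"), ("tagging", "parsing"),
              ("tokenization", "tagging")}

print(round(f1(pred_edges, gold_edges), 3))  # 2 of 3 edges match -> 0.667
```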


A Latent Dirichlet Allocation (LDA) Semantic Text Analytics Approach to Explore Topical Features in Charity Crowdfunding Campaigns

Muzumdar, Prathamesh, Kurian, George, Basyal, Ganga Prasad

arXiv.org Artificial Intelligence

Crowdfunding on the Social Web has received substantial attention, with prior research examining various aspects of campaigns, including project objectives, durations, and the project categories most conducive to successful fundraising. These factors are crucial for entrepreneurs seeking donor support. However, charity crowdfunding on the Social Web remains relatively unexplored, and the motivations driving donations that often lack concrete reciprocation are poorly understood. Distinct from conventional crowdfunding that offers tangible returns, charity crowdfunding relies on intangible rewards such as tax advantages, recognition posts, or advisory roles. Such details are often embedded within campaign narratives, yet analysis of textual content in charity crowdfunding remains limited. This study introduces a text analytics framework that uses Latent Dirichlet Allocation (LDA) to extract latent themes from the textual descriptions of charity campaigns. The study identified four themes, two each in the campaign and incentive descriptions. Campaign description themes centre on child and elderly health, mainly patients diagnosed with terminal diseases; incentive description themes concern tax benefits, certificates, and appreciation posts. Combined with numerical parameters, these themes predict campaign success: a Random Forest classifier successfully predicted campaign outcomes using both thematic and numerical parameters. The study distinguishes thematic categories, particularly medical need-based charity versus general causes, based on project and incentive descriptions. In conclusion, this research bridges a gap by showcasing the utility of topic modelling in the under-explored charity crowdfunding domain.
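The LDA-plus-classifier pipeline can be sketched on toy data with scikit-learn (assumed available): topic proportions extracted from campaign text are concatenated with numeric features and fed to a Random Forest. All texts, goal amounts, and outcomes below are invented:

```python
# Sketch of the paper's pipeline on invented data: LDA topic proportions
# from campaign texts join a numeric feature (goal amount) as inputs to a
# Random Forest predicting campaign success.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier

texts = [
    "child cancer treatment hospital donation",
    "elderly heart surgery medical fund",
    "community garden general cause support",
    "neighborhood cleanup general cause volunteers",
]
goal_amounts = np.array([[5000.0], [8000.0], [1000.0], [700.0]])
succeeded = np.array([1, 1, 0, 0])  # invented outcomes

counts = CountVectorizer().fit_transform(texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_props = lda.fit_transform(counts)          # shape (4, 2), rows sum to 1

features = np.hstack([topic_props, goal_amounts])
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features, succeeded)
print(clf.predict(features))
```

With real campaigns the texts number in the thousands and the numeric block holds several parameters, but the hstack-then-fit shape of the pipeline is the same.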


Using Text Classification with a Bayesian Correction for Estimating Overreporting in the Creditor Reporting System on Climate Adaptation Finance

Borst, Janos, Wencker, Thomas, Niekler, Andreas

arXiv.org Artificial Intelligence

There is international consensus on the need to respond to the global threat posed by climate change (Paris Accord, Article 2). Development funds are essential to finance climate change adaptation and are thus an important part of international climate policy. The 2009 Copenhagen Accord (UNFCCC, 2009) aimed to mobilize USD 100 billion by 2020. Implementation of climate change adaptation measures is one of five targets set to reach the 13th Sustainable Development Goal (SDG): "Take urgent action to combat climate change and its impacts". The Creditor Reporting System (CRS), maintained by the OECD Development Assistance Committee (DAC), monitors adaptation finance flows from OECD DAC member countries to developing countries. One of the challenges in ensuring valid reporting - or at least comparable figures - across reporting agencies is that the agreements mentioned above lack indicators. To this end, the OECD DAC established in 2009 the Rio markers on climate change adaptation (CCA). For each aid activity, donors report whether it contributes to CCA, i.e. reducing "the vulnerability of human or natural systems to the current and expected impacts of climate change, including climate variability, by maintaining or increasing resilience, through increased ability to adapt to, or absorb, climate change stresses, shocks and variability and/or by helping reduce exposure to them" (OECD DAC, 2022, p. 4). Activities are eligible for a marker if "a) the climate change adaptation objective is explicitly indicated in the activity documentation; and b) the activity contains specific measures targeting the definition above."
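The core idea — adjusting a classifier-derived share of adaptation-marked activities for the classifier's own error rates — can be illustrated with the standard Rogan-Gladen estimator. The paper's Bayesian correction is more elaborate, and every number below is invented:

```python
# Illustrative misclassification correction: the raw share of projects a
# classifier marks as adaptation-relevant is adjusted for the classifier's
# sensitivity and specificity (Rogan-Gladen estimator). The paper uses a
# fuller Bayesian treatment; all numbers here are invented.

def corrected_prevalence(observed, sensitivity, specificity):
    """Estimate the true positive share from the observed positive share."""
    return (observed + specificity - 1) / (sensitivity + specificity - 1)

observed_share = 0.30   # share the classifier labels as adaptation finance
sens, spec = 0.85, 0.90 # classifier performance on a validation set

true_share = corrected_prevalence(observed_share, sens, spec)
overreporting = observed_share - true_share
print(round(true_share, 3), round(overreporting, 3))
```

The gap between the observed and corrected shares is one way to quantify overreporting against the Rio marker definitions.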


Intel, NSF Name Winners of Wireless Machine Learning Research Funding – IAM Network

#artificialintelligence

Intel and the National Science Foundation (NSF), joint funders of the Machine Learning for Wireless Networking Systems (MLWiNS) program, today announced the recipients of awards for research projects into ultra-dense wireless systems that deliver the throughput, latency, and reliability requirements of future applications, including distributed machine learning computations over wireless edge networks.

Institutions: University of Illinois Urbana-Champaign and University of Washington
Project Leads: Pramod Viswanath (University of Illinois Urbana-Champaign) and Sewoong Oh (University of Washington)
Project Description: This project will use deep learning in the physical layer of communication systems, enabling researchers to: 1) study the operation of new neural-network-based, nonlinear channel codes through jointly trained encoders and decoders, 2) integrate information theory, which can reduce the number of parameters to be learned and improve the training efficiency of communication systems, to create non-linear codes in feedback channels, and 3) design a family of non-linear neural codes for interference networks.


Comfort, Interaction and Efficiency: Artificial Intelligence in Architectural Projects

#artificialintelligence

The incorporation of new technologies into architectural design has expanded design possibilities over the last few years. Automation in construction processes can serve both large-scale city strategies and smaller-scale demands such as the construction of residences. One of the more recent ways technology has been integrated into workplace design is through artificial intelligence, which uses data to "teach" machines how to work at several levels of autonomy. How artificial intelligence can be incorporated into the daily function of workplaces depends on the type and amount of data used for the projects, and on how it can contribute to evaluating construction efficiency, simulating human movement reflected in the drawings, performing structural calculations, and other design opportunities. Here is a short list of projects that effectively utilize artificial intelligence: the Philips Lighting Headquarters in Eindhoven, Netherlands, takes advantage of understanding how lighting could become the centerpiece of a design project.